Atomic-Scale Investigation of Graphene Grown on Cu Foil and the Effects of Thermal Annealing
We have investigated the effects of thermal annealing on ex situ chemical-vapor-deposited submonolayer graphene islands on polycrystalline Cu foil at the atomic scale using ultrahigh-vacuum scanning tunneling microscopy. Graphene islands on Cu foil annealed at low temperature (~430 °C) exhibit predominantly striped Moiré patterns, indicating a relatively weak interaction between graphene and the underlying polycrystalline Cu foil. Rapid high-temperature annealing of the sample (at 700-800 °C) removes the Cu oxide and recovers the crystallographic features of the copper surrounding the intact graphene. These experimental observations of continuous crystalline features between the underlying copper (beneath the graphene islands) and the surrounding exposed copper areas revealed by high-temperature annealing demonstrate the impenetrable nature of graphene and its potential application as a protective layer against corrosion.
Electronic Transport in Chemical Vapor Deposited Graphene Synthesized on Cu: Quantum Hall Effect and Weak Localization
We report on electronic properties of graphene synthesized by chemical vapor
deposition (CVD) on copper then transferred to SiO2/Si. Wafer-scale (up to 4
inches) graphene films have been synthesized, consisting dominantly of
monolayer graphene as indicated by spectroscopic Raman mapping. Low temperature
transport measurements are performed on micro devices fabricated from such CVD
graphene, displaying ambipolar field effect (with on/off ratio ~5 and carrier
mobilities up to ~3000 cm^2/Vs) and "half-integer" quantum Hall effect, a
hall-mark of intrinsic electronic properties of monolayer graphene. We also
observe weak localization and extract information about phase coherence and
scattering of carriers.
Comment: shortened version, published in APL. See version 1 for more Raman data.
Direct Imaging of Graphene Edges: Atomic Structure and Electronic Scattering
We report an atomically-resolved scanning tunneling microscopy (STM)
investigation of the edges of graphene grains synthesized on Cu foils by
chemical vapor deposition (CVD). Most of the edges are macroscopically parallel
to the zigzag directions of graphene lattice. These edges have microscopic
roughness that is found to also follow zigzag directions at atomic scale,
displaying many ~120 degree turns. A prominent standing wave pattern with
periodicity ~3a/4 (a being the graphene lattice constant) is observed near a
rare-occurring armchair-oriented edge. Observed features of this wave pattern
are consistent with the electronic intervalley backscattering predicted to
occur at armchair edges but not at zigzag edges
Control and Characterization of Individual Grains and Grain Boundaries in Graphene Grown by Chemical Vapor Deposition
The strong interest in graphene has motivated the scalable production of high
quality graphene and graphene devices. Since large-scale graphene films
synthesized to date are typically polycrystalline, it is important to
characterize and control grain boundaries, generally believed to degrade
graphene quality. Here we study single-crystal graphene grains synthesized by
ambient CVD on polycrystalline Cu, and show how individual boundaries between
coalescing grains affect graphene's electronic properties. The graphene grains
show no definite epitaxial relationship with the Cu substrate, and can cross Cu
grain boundaries. The edges of these grains are found to be predominantly
parallel to zigzag directions. We show that grain boundaries give a significant
Raman "D" peak, impede electrical transport, and induce prominent weak
localization indicative of intervalley scattering in graphene. Finally, we
demonstrate an approach using pre-patterned growth seeds to control graphene
nucleation, opening a route towards scalable fabrication of single-crystal
graphene devices without grain boundaries.
Comment: new version with additional data. Accepted by Nature Materials.
The Human Activity Radar Challenge: benchmarking based on the ‘Radar signatures of human activities’ dataset from Glasgow University
Radar is an extremely valuable sensing technology for detecting moving targets and measuring their range, velocity, and angular position. When people are monitored at home, radar is more likely to be accepted by end-users: they already use WiFi, it is perceived as privacy-preserving compared to cameras, and it does not require user compliance as wearable sensors do. Furthermore, it is not affected by lighting conditions and does not require artificial lights that could cause discomfort in the home environment. Radar-based human activity classification in the context of assisted living can therefore empower an aging society to live at home independently for longer. However, challenges remain in formulating the most effective algorithms for radar-based human activity classification and in validating them. To promote the exploration and cross-evaluation of different algorithms, our dataset released in 2019 was used to benchmark various classification approaches. The challenge was open from February 2020 to December 2020. A total of 23 organizations worldwide, forming 12 teams from academia and industry, participated in the inaugural Radar Challenge and submitted 188 valid entries. This paper presents an overview and evaluation of the approaches used for all primary contributions in this inaugural challenge. The proposed algorithms are summarized, and the main parameters affecting their performance are analyzed.
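The velocity measurement mentioned above rests on the Doppler relation v = f_d · λ / 2 for a monostatic radar. A minimal sketch of that conversion follows; the 5.8 GHz carrier frequency is an assumed illustrative value, not a parameter quoted in this abstract:

```python
def doppler_velocity(f_doppler_hz, carrier_freq_hz):
    """Radial velocity (m/s) of a target from its Doppler shift.

    Uses v = f_d * lambda / 2, valid for a monostatic radar where the
    signal travels to the target and back (hence the factor of 2).
    """
    c = 3.0e8                       # speed of light, m/s
    wavelength = c / carrier_freq_hz
    return f_doppler_hz * wavelength / 2.0


# Illustrative example (assumed 5.8 GHz carrier): a 100 Hz Doppler shift
# corresponds to a radial velocity of roughly 2.59 m/s, i.e. walking pace.
v = doppler_velocity(100.0, 5.8e9)
```

Human activities such as walking or sitting down produce characteristic time-varying patterns of such Doppler shifts (micro-Doppler signatures), which is what the classification algorithms in the challenge operate on.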
Methodology for Large-Scale Camera Positioning to Enable Intelligent Self-Configuration
The development of a self-configuring method for efficiently locating moving targets indoors could enable extraordinary advances in the control of industrial automatic production equipment. Interactively connected cameras forming a network represent a promising visual system for wireless positioning, with the ultimate goal of replacing or enhancing conventional sensors. Developing a highly efficient algorithm for collaborating cameras in the network is of particular interest. This paper presents an intelligent positioning system capable of integrating, through self-configuration, the visual information obtained by large numbers of cameras. An extended Kalman filter predicts the position, velocity, acceleration, and jerk (the third derivative of position) of the moving target. As a result, the camera-network-based visual positioning system can locate a moving target with high precision: relative errors for positional parameters are all smaller than 10%, and relative errors for linear velocities (vx, vy) are also kept to an acceptable level, i.e., lower than 20%. This demonstrates the strong potential of this visual positioning system for industrial automation, including wireless intelligent control, high-precision indoor positioning, and navigation.
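The constant-jerk state prediction described above (position, velocity, acceleration, jerk) can be sketched as one Kalman predict/update cycle. This is a minimal one-axis illustration under assumed noise parameters, not the paper's actual implementation; because the position measurement model here is linear, it reduces to a standard Kalman filter rather than a full EKF, which would additionally linearize the camera projection:

```python
import numpy as np

def transition(dt):
    # Constant-jerk state transition for one axis: state = [pos, vel, acc, jerk].
    # pos' = pos + vel*dt + acc*dt^2/2 + jerk*dt^3/6, etc.
    return np.array([
        [1.0, dt,  dt**2 / 2, dt**3 / 6],
        [0.0, 1.0, dt,        dt**2 / 2],
        [0.0, 0.0, 1.0,       dt       ],
        [0.0, 0.0, 0.0,       1.0      ],
    ])

def kf_step(x, P, z, dt, q=1e-3, r=0.05):
    """One predict/update cycle; z is a 1-element position measurement."""
    F = transition(dt)
    H = np.array([[1.0, 0.0, 0.0, 0.0]])  # cameras observe position only
    Q = q * np.eye(4)                      # process noise (tuning assumption)
    R = np.array([[r]])                    # measurement noise (tuning assumption)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Tracking jerk lets the filter anticipate changes in acceleration, which helps when cameras report positions at a limited frame rate and the target maneuvers between observations.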